
    A Training Framework of Robotic Operation and Image Analysis for Decision-Making in Bridge Inspection and Preservation

    Inspecting and preserving existing transportation infrastructure to extend its service life is an effective way of mitigating the pressure that steadily growing transportation demands place on aging infrastructure. Current practice, though, represents one of the most costly operations in state departments of transportation. The INSPIRE University Transportation Center will develop a remotely controlled robotic platform that helps with these labor-intensive tasks and allows engineers to focus on decision-making processes. An important mission of INSPIRE is to strengthen users’ ability to implement, and interact with, the robotic platform. Therefore, a long-term plan has been made to create a framework for training engineers and policy makers, as well as the new workforce, on robotic operation and image analysis for the inspection and maintenance of transportation infrastructure. The proposed project, as a component of the plan, involves prototyping such a framework based on camera-based bridge inspection and robot-based maintenance. The overall goal of the project is to create a framework for training engineers and policy makers on robotic operation and image analysis for the inspection and preservation of transportation infrastructure. Specifically, the project aims to (1) provide a method for collecting camera-based bridge inspection data and algorithms for data processing and pattern recognition; and (2) create tools for assisting and training users in visually analyzing the processed image data and recognized patterns for inspection and preservation decision-making.

    Human-Robot Collaboration for Effective Bridge Inspection in the Artificial Intelligence Era

    Advancements in sensor, Artificial Intelligence (AI), and robotic technologies have formed a foundation for transforming traditional engineering systems into complex adaptive systems. This paradigm shift will bring exciting changes to civil infrastructure systems and their builders, operators, and managers. Funded by the INSPIRE University Transportation Center (UTC), Dr. Qin’s group investigated the holism of an AI-robot-inspector system for bridge inspection. Dr. Qin will discuss the need for close collaboration among the constituent components of the AI-robot-inspector system. In drone-based bridge inspection, the mobile robotic platform rapidly collects large volumes of inspection video data that must be processed prior to element-level inspections. She will illustrate how human intelligence and artificial intelligence can collaborate to create an AI model both efficiently and effectively. Obtaining a large amount of expert-annotated data for model training is undesirable, if not unrealistic, in bridge inspection. This INSPIRE project addressed the annotation challenge by developing a semi-supervised self-training (S3T) algorithm that uses a small amount of inspectors’ time and guidance to help the model achieve excellent performance. The project evaluated the improvement in job efficacy produced by the developed AI model. The presentation will conclude by introducing ongoing work toward the desired adaptability of AI models to new or revised tasks in bridge inspection, as the National Bridge Inventory includes over 600,000 bridges of various material types, shapes, and ages.
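    The self-training idea can be illustrated with a toy sketch: a model trained on a few labeled examples pseudo-labels the unlabeled samples it is confident about, folds them into the training set, and repeats. The one-dimensional threshold classifier, the margin-based confidence rule, and all numbers below are hypothetical illustrations of self-training in general, not the project's actual S3T algorithm or its inspector-guidance step.

```python
# Toy self-training loop (hypothetical model; not the S3T algorithm itself).

def fit_threshold(xs, ys):
    """Fit a 1-D threshold classifier: midpoint of the two class means."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def self_train(labeled_x, labeled_y, unlabeled_x, margin=0.5, rounds=3):
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled_x)
    for _ in range(rounds):
        t = fit_threshold(xs, ys)
        # Only samples far from the decision boundary are "confident".
        confident = [x for x in pool if abs(x - t) > margin]
        if not confident:
            break
        # Pseudo-label the confident samples and fold them into training.
        xs += confident
        ys += [1 if x > t else 0 for x in confident]
        pool = [x for x in pool if abs(x - t) <= margin]
    return fit_threshold(xs, ys)

# Two labeled samples, four unlabeled ones.
threshold = self_train([0.0, 4.0], [0, 1], [0.5, 1.0, 3.0, 3.5])
print(round(threshold, 2))  # → 2.0
```

    In the real setting the inspector supplies the small labeled set and corrects low-confidence pseudo-labels, which is what keeps the required annotation effort small.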

    Attaining Knowledge Workforce Agility in a Product Life Cycle Environment using Real Options

    The product life cycle (PLC) phenomenon has placed significant pressure on high-tech industries, which rely heavily on the knowledge workforce to transfer cutting-edge technologies into products. This thesis examines systems where market changes and production technology advances happen frequently and unpredictably during the PLC, causing difficulties in predicting an appropriate demand for the knowledge workforce and in maintaining reliable performance. Knowledge workforce agility (KWA) is identified as a desirable means of addressing these difficulties, and yet previous work on KWA is incomplete. This thesis accomplishes several critical tasks for realizing the benefits of KWA in a representative PLC environment, semiconductor manufacturing. Real options (RO) is chosen as the approach to exploiting KWA, since RO captures the essence of KWA: options in manipulating knowledge capacity, a human asset, or a self-cultivated organizational capability for pursuing interests associated with change. Accordingly, market demand change and workforce knowledge (WK) dynamics in the adoption of technology advances are formalized as underlying stochastic processes during the PLC. This thesis models KWA as capacity options in a knowledge workforce and develops an RO approach of workforce training, either initial or continuous, for generating options. To quantify the elements of KWA that impact production, the role of the knowledge workforce in production and the costs of obtaining KWA are characterized mathematically. The thesis creates the necessary RO valuation methods and techniques to optimize KWA. An analytical examination of the PLC models shows that KWA has the potential to reduce negative impacts and generate opportunities in an environment of volatile demand, and to compensate for the unreliable performance of the knowledge workforce in adopting technology advances.
    The benefits of KWA are especially important when confronting highly volatile demand, a low initial adoption level, shrinking PLCs, a growing market size, intense and frequent WK dynamics, insufficient learning capability of employees, or diminishing returns from investments in learning. The thesis further assesses RO, as an agility-driven approach, by comparing it to a chase-demand heuristic and to the Bass forecasting model under demand uncertainty. The assessment demonstrates that the KWA attained from the RO approach, termed RO-based KWA, leads to a stably higher yield, a persistently larger net present value (NPV), and an NPV distribution that is more robust to highly volatile demand. Subsequently, a quantitative evaluation of KWA value shows that RO-based KWA creates considerable profit growth under uncertainty in either demand or the WK dynamics. In the evaluation, RO modeling and RO valuation are identified as useful in creating KWA value, especially in highly uncertain PLC environments. The thesis also illustrates the effectiveness of the numerical methods used for solving the dynamic system problem. This research demonstrates an approach for optimizing KWA in PLC environments using RO and provides an innovative solution for knowledge workforce planning in rapidly changing and highly unpredictable environments. The work is representative of studying KWA with quantitative techniques, of which there is a dearth in the literature.

    Fusion-GRU: A Deep Learning Model for Future Bounding Box Prediction of Traffic Agents in Risky Driving Videos

    To ensure the safe and efficient navigation of autonomous vehicles and advanced driver-assistance systems in complex traffic scenarios, predicting the future bounding boxes of surrounding traffic agents is crucial. However, simultaneously predicting the future location and scale of target traffic agents from the egocentric view is challenging because the vehicle's ego-motion causes considerable field-of-view changes. Moreover, in anomalous or risky situations, tracking loss or abrupt motion changes limit the available observation time, requiring cues to be learned within a short time window. Existing methods typically use a simple concatenation operation to combine different cues, overlooking their dynamics over time. To address this, this paper introduces the Fusion-Gated Recurrent Unit (Fusion-GRU) network, a novel encoder-decoder architecture for future bounding box localization. Unlike traditional GRUs, Fusion-GRU accounts for mutual and complex interactions among input features. Moreover, an intermediary estimator coupled with a self-attention aggregation layer is introduced to learn sequential dependencies for long-range prediction. Finally, a GRU decoder is employed to predict the future bounding boxes. The proposed method is evaluated on two publicly available datasets, ROL and HEV-I. The experimental results showcase the promising performance of Fusion-GRU, demonstrating its effectiveness in predicting the future bounding boxes of traffic agents.
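    The contrast between plain concatenation and a learned fusion of two feature streams can be sketched as a gated combination: a gate decides, per dimension, how much each stream contributes. The gate matrix, feature dimensions, and random inputs below are hypothetical; this is a minimal illustration of gated fusion in general, not the Fusion-GRU architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(a, b, W_g):
    """Fuse two feature streams with a gate instead of plain concatenation.

    The sigmoid gate z is computed from both streams, so each output
    dimension is a data-dependent blend of the two inputs."""
    z = 1.0 / (1.0 + np.exp(-W_g @ np.concatenate([a, b])))  # gate in (0, 1)
    return z * a + (1.0 - z) * b

d = 4
W_g = rng.standard_normal((d, 2 * d)) * 0.1  # hypothetical gate weights
a = rng.standard_normal(d)   # e.g., bounding-box features at one time step
b = rng.standard_normal(d)   # e.g., optical-flow features at the same step
fused = gated_fusion(a, b, W_g)
print(fused.shape)  # → (4,)
```

    Because the gate lies in (0, 1), each fused value is a convex combination of the two streams, unlike concatenation, which leaves the interaction entirely to later layers.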

    A Multitask Deep Learning Model for Parsing Bridge Elements and Segmenting Defect in Bridge Inspection Images

    The vast network of bridges in the United States creates a high demand for maintenance and rehabilitation. The massive cost of manual visual inspection to assess bridge conditions is a heavy burden. Advanced robots have been leveraged to automate inspection data collection. Automating the segmentation of multiclass elements, and of surface defects on those elements, in the large volume of inspection image data would facilitate an efficient and effective assessment of bridge condition. Training separate single-task networks for element parsing (i.e., semantic segmentation of multiclass elements) and defect segmentation fails to incorporate the close connection between these two tasks: both recognizable structural elements and apparent surface defects are present in the inspection images. This paper develops a multitask deep learning model that fully utilizes this interdependence between bridge elements and defects to boost the model's task performance and generalization. Furthermore, the study investigated the effectiveness of the proposed model designs for improving task performance, including feature decomposition, cross-talk sharing, and a multi-objective loss function. A dataset with pixel-level labels of bridge elements and corrosion was developed for model training and testing. Quantitative and qualitative results from evaluating the developed multitask model demonstrate its advantages over the single-task-based model not only in performance (2.59% higher mIoU on bridge parsing and 1.65% on corrosion segmentation) but also in computational time and implementation capability.
    Comment: Accepted for presentation at the 2023 TRB Annual Meeting and publication in the Transportation Research Record: Journal of the Transportation Research Board (TRR).
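    A multi-objective loss of the kind mentioned above is, in its simplest form, a weighted sum of the per-task losses, so that the shared layers receive gradient signal from both tasks at once. The class counts, task weights, and probabilities below are hypothetical; the paper's actual loss design is not reproduced here.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy over pixels; probs is (N, C), labels is (N,)."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def multitask_loss(elem_probs, elem_labels, defect_probs, defect_labels,
                   w_elem=1.0, w_defect=1.0):
    # A single scalar objective: backpropagating it trains the shared
    # backbone on element parsing and defect segmentation jointly.
    return (w_elem * cross_entropy(elem_probs, elem_labels)
            + w_defect * cross_entropy(defect_probs, defect_labels))

# Hypothetical predictions for 2 pixels and 2 classes per task.
elem_probs = np.array([[0.9, 0.1], [0.2, 0.8]])
elem_labels = np.array([0, 1])
defect_probs = np.array([[0.7, 0.3], [0.6, 0.4]])
defect_labels = np.array([0, 0])
loss = multitask_loss(elem_probs, elem_labels, defect_probs, defect_labels)
print(round(loss, 3))
```

    The weights `w_elem` and `w_defect` are the usual knob for balancing tasks when one loss dominates the other during training.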

    An Attention-guided Multistream Feature Fusion Network for Localization of Risky Objects in Driving Videos

    Detecting dangerous traffic agents in videos captured by vehicle-mounted dashboard cameras (dashcams) is essential for safe navigation in complex environments. Accident-related videos are only a minor portion of driving video big data, and the transient pre-accident processes are highly dynamic and complex. Moreover, risky and non-risky traffic agents can be similar in appearance. These factors make risky object localization in driving video particularly challenging. To this end, this paper proposes an attention-guided multistream feature fusion network (AM-Net) to localize dangerous traffic agents in dashcam videos. Two Gated Recurrent Unit (GRU) networks use object bounding box and optical flow features extracted from consecutive video frames to capture spatio-temporal cues for distinguishing dangerous traffic agents. An attention module coupled with the GRUs learns to attend to the traffic agents relevant to an accident. Fusing the two streams of features, AM-Net predicts the riskiness scores of traffic agents in the video. In support of this study, the paper also introduces a benchmark dataset called Risky Object Localization (ROL). The dataset contains spatial, temporal, and categorical annotations with accident, object, and scene-level attributes. The proposed AM-Net achieves a promising performance of 85.73% AUC on the ROL dataset and outperforms the current state of the art for video anomaly detection by 6.3% AUC on the DoTA dataset. A thorough ablation study further reveals AM-Net's merits by evaluating the contributions of its different components.
    Comment: Submitted to IEEE-T-IT
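    The attention step can be sketched generically: each agent's features are scored, the scores are normalized with a softmax, and the resulting weights both rank the agents and produce a weighted summary of the scene. The scoring vector and agent features below are hypothetical, not AM-Net's learned parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_to_agents(agent_feats, w):
    """Score each agent with a (hypothetical) learned vector w, normalize
    the scores so they sum to 1, and build a weighted scene summary."""
    weights = softmax(agent_feats @ w)   # one weight per agent
    context = weights @ agent_feats      # attention-weighted summary
    return weights, context

agents = np.array([[0.1, 0.2],    # e.g., a parked car
                   [0.9, 0.8],    # e.g., a swerving vehicle
                   [0.2, 0.1]])   # e.g., a distant pedestrian
w = np.array([1.0, 1.0])
weights, context = attend_to_agents(agents, w)
print(int(np.argmax(weights)))  # → 1 (the highest-scoring agent)
```

    In a full model the scores would come from the GRU hidden states rather than a fixed vector, but the normalize-then-weight pattern is the same.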

    Image Data Analytics to Support Engineers’ Decision-Making

    Robots such as drones have been leveraged to perform structural health inspection, such as bridge inspection. Large volumes of inspection video can be collected by cameras mounted on drones. In this project, we develop image analysis algorithms that help bridge engineers analyze this big video data. A bridge engineer defines a region of interest initially; the algorithm then retrieves all related regions in the video, which lets the engineer inspect the bridge without exhaustively checking every frame. To perform this task, we propose a Multi-scale Siamese Neural Network. The network is initially trained by one-shot learning and is fine-tuned iteratively with a human in the loop. Our neural network is evaluated on three bridge inspection videos with promising performance.
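    The retrieval step described above can be sketched with embeddings and a similarity threshold: the engineer's region of interest is embedded once, and every candidate region whose embedding is sufficiently similar is returned. The three-dimensional embeddings and the 0.8 threshold are hypothetical illustrations, not the trained Multi-scale Siamese Neural Network.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieve_matches(query_emb, region_embs, threshold=0.8):
    """Return indices of candidate regions whose embeddings are close
    to the engineer-defined region of interest."""
    return [i for i, e in enumerate(region_embs)
            if cosine_similarity(query_emb, e) >= threshold]

query = np.array([1.0, 0.0, 0.0])        # embedding of the chosen ROI
regions = [np.array([0.9, 0.1, 0.0]),    # visually similar region
           np.array([0.0, 1.0, 0.0]),    # unrelated region
           np.array([1.0, 0.05, 0.0])]   # visually similar region
print(retrieve_matches(query, regions))  # → [0, 2]
```

    In a Siamese setup both the query and the candidates pass through the same embedding network, so similar image patches land near each other regardless of which frame they come from.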

    Modeling and Simulation of a Robotic Bridge Inspection System

    Inspection and preservation of aging bridges to extend their service life has been recognized as one of the important tasks of the state Departments of Transportation. Yet the manual inspection procedure is not efficient enough to determine the safety status of bridges and facilitate the implementation of appropriate maintenance. In this paper, a complex model involving a remotely controlled robotic platform is proposed to inspect the safety status of bridges, eliminating labor-intensive manual inspection. Mobile cameras on unmanned aerial vehicles (UAVs) collect bridge inspection data in order to record periodic changes of bridge components. All the UAVs are controlled via a control station and continuously feed image data to a deep learning-based detection algorithm that analyzes the data to detect critical structural components. A cellular automata-based pattern recognition algorithm is used to find patterns of structural damage. A simulation model is developed to validate the proposed method, given the frequency and time required for each task involved in bridge inspection and maintenance. The effectiveness of the model is demonstrated by simulating bridge inspection and maintenance with the proposed model over five years in AnyLogic. The simulation results show that around 80% of the man-hours can be saved with the proposed approach.

    Vision Sensor based Action Recognition for Improving Efficiency and Quality under the Environment of Industry 4.0

    In the environment of Industry 4.0, human beings remain an important factor influencing efficiency and quality, which are at the core of product life cycle management. Hence, monitoring and analyzing humans' actions is essential. This paper proposes a vision-sensor-based method to evaluate the accuracy of operators' actions. Each action of an operator is recognized in real time by a Convolutional Neural Network (CNN)-based classification model in which hierarchical clustering is introduced to minimize the effects of action uncertainty. Warnings are triggered in real time when incorrect actions occur, and applications to action analysis of workers on a reducer assembly line show the effectiveness of the proposed method. The research is expected to provide guidance for operators to correct their actions, reducing the cost of quality defects and improving the efficiency of the workforce.